British Journal of Anaesthesia
Elsevier BV
Preprints posted in the last 30 days, ranked by how well they match British Journal of Anaesthesia's content profile, based on 13 papers previously published here. The average preprint has a 0.14% match score for this journal, so anything above that is already an above-average fit.
Hopkins, P.; Aboelsaod, E. M.; Daly, C.; Fisher, N.; Hobson, S. J.; Garland, H.; Gupta, P. K.; Bilmen, J. G.; Shepherd, S.; Robinson, R. L.; Shaw, M.-A.
Background: There is disparity between the incidence of malignant hyperthermia (MH) reactions and the prevalence of variants in the RYR1 gene associated with susceptibility to MH (where susceptibility is determined by in vitro contracture tests). Our aims were to use clinical and genetic data from the UK to explain this disparity and to examine if these data are consistent with the clinical risk of MH being inherited as an autosomal dominant trait. Methods: Clinical MH and genotyping data were extracted from the UK MH registry. The numbers of general anaesthetics delivered in the UK were estimated from national surveys and reports, with population data obtained from government statistics. The prevalence of RYR1 variants in the UK population was estimated using UK Biobank data. The incidence of MH reactions 1988-93 was used to estimate the prevalence of the clinical risk of MH in the UK. Bayesian modelling, calibrated against actual data, was used to evaluate the likely mode of inheritance of the clinical risk of MH and the relative risk of clinical MH associated with different RYR1 variants. Results: The probability of index cases developing MH with each general anaesthetic can be expressed as a constant hazard of 0.46 (95% CI 0.42-0.50, n=375). We used peak incidence data (1988-93) to estimate the prevalence of the risk of MH as 1 in 44,000 (95% credibility interval, 1 in 40,000 to 1 in 48,000). The incidence of MH has declined over the past 22 years but the rate of decline is inconsistent with autosomal dominant inheritance (P < 10⁻¹⁰). The risk of MH varied by up to 150-fold between carriers of 28 recurrent RYR1 variants. Conclusion: These findings support a threshold inheritance model for clinical MH and have implications for diagnostics, both genotyping and in vitro contracture test phenotyping.
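The constant-hazard framing lends itself to a simple sketch: if each susceptible patient has the same probability p of reacting at every anaesthetic, the number of anaesthetics to the index reaction is geometric. Below is a minimal, hypothetical illustration of estimating p by maximum likelihood from simulated data; it is not the paper's Bayesian model, and the counts are invented.

```python
# Toy sketch: constant per-anaesthetic reaction probability for MH index
# cases. Assumes each index case reacts on their k-th anaesthetic, so k
# follows a geometric distribution with parameter p. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
p_true = 0.46                      # illustrative value from the abstract
n_cases = 375
anaesthetics_to_reaction = rng.geometric(p_true, size=n_cases)  # simulated

# Geometric MLE: p_hat = number of cases / total anaesthetics received
p_hat = n_cases / anaesthetics_to_reaction.sum()

# Rough Wald 95% CI via the Fisher information of the geometric model
se = np.sqrt(p_hat**2 * (1 - p_hat) / n_cases)
print(f"p_hat = {p_hat:.2f}, 95% CI {p_hat - 1.96*se:.2f} to {p_hat + 1.96*se:.2f}")
```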
Ershoff, B. D.
Background: Propofol dosing guidelines recommend age-based reductions because hypnotic sensitivity increases in older adults. Most real-world evaluations of induction practice, however, have relied on total weight-normalized dose (mg/kg) rather than estimating cerebral exposure using pharmacokinetic models. Because age-related pharmacokinetic changes alter the relationship between administered dose and peak effect-site concentration (Ce,max), mg/kg surrogates may misrepresent true age-dependent exposure during induction. Methods: A retrospective reconstruction of 250,640 adult anesthetic inductions was performed using high-fidelity EHR medication timestamps. Propofol effect-site concentration trajectories were simulated at 1-second resolution using the Eleveld model. Ce,max was benchmarked against age-adjusted hypnotic requirements (Ce50) derived from the Eleveld model (standardized to a target Bispectral Index ≈47). Age-exposure relationships were estimated using covariate-adjusted natural cubic splines, controlling for BMI, sex, and ASA physical status. Results: From young adulthood (18-24 years) to the oldest cohort (85-89 years), weight-normalized induction doses were reduced by 32% (3.16 to 2.16 mg/kg). However, modeled Ce,max declined by only 17% (3.70 to 3.06 µg/mL), while the estimated physiological requirement declined by 34% (3.37 to 2.21 µg/mL), creating a widening titration offset with age. At age 75, the adjusted probability of exceeding the individual hypnotic requirement was 89.6% (95% CI: 89.3-89.8%). Notably, 54.7% (95% CI: 54.2-55.2%) of 75-year-old patients achieved peak exposures exceeding the average requirement of a healthy 20-year-old, indicating persistent anchoring of exposure to youthful levels. Findings were robust across model specifications and inclusion criteria. Conclusions: In over a quarter-million inductions, real-world age-based dose reductions did not produce proportional reductions in peak propofol brain exposure. Achieved concentrations declined far more slowly than modeled geriatric sensitivity increases, consistent with systematic over-exposure in older adults. These findings suggest that weight-based dosing heuristics inadequately capture age-dependent exposure and support a transition toward exposure-informed and neurophysiologically guided induction titration in geriatric anesthesia.
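The core computation here is the plasma-to-effect-site link. A minimal sketch, assuming a first-order link dCe/dt = ke0 * (Cp - Ce) integrated at 1-second resolution; the plasma curve and ke0 below are illustrative placeholders, not the Eleveld model parameters used in the study.

```python
# Minimal sketch of effect-site concentration simulation at 1-second
# resolution. Toy plasma decay curve and assumed ke0, for illustration only.
import numpy as np

dt = 1.0                          # 1-second resolution, as in the study
t = np.arange(0, 600, dt)         # 10 minutes after a bolus
cp = 12.0 * np.exp(-t / 120.0)    # toy plasma concentration curve (ug/mL)

ke0 = 0.15 / 60.0                 # assumed plasma/effect-site rate constant (1/s)
ce = np.zeros_like(cp)
for i in range(1, len(t)):
    # First-order link: dCe/dt = ke0 * (Cp - Ce), forward Euler step
    ce[i] = ce[i - 1] + dt * ke0 * (cp[i - 1] - ce[i - 1])

print(f"Ce,max = {ce.max():.2f} ug/mL at t = {t[ce.argmax()]:.0f} s")
```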
Morisson, L.; Latreille, A.; Pietrancosta, M.; Djerroud, K.; Tanoubi, I.; Hemmerling, T.; Laferriere-Langlois, P.
Purpose: To quantify and compare the peak force applied on the glottis during endotracheal intubation across five laryngoscopy techniques, two intubation conditions (standard and simulated laryngospasm), and two operator experience levels, and to assess the effects of stylet use and operator anthropometric characteristics on applied force. Methods: This prospective, manikin-based experimental study enrolled 50 operators (30 experienced, 20 less experienced). Each performed endotracheal intubation using five techniques: direct laryngoscopy and videolaryngoscopy with a Macintosh blade, each with and without stylet, and videolaryngoscopy with a hyperangulated blade with stylet. A calibrated force sensor positioned at the glottis measured peak forces during standard and simulated laryngospasm conditions. Non-parametric statistical methods were used (Mann-Whitney U, Wilcoxon signed-rank, Friedman tests); effect sizes are reported as rank-biserial correlations. Results: Across all techniques, median glottic forces ranged from 4.8 N (IQR: 3.3-6.5) for videolaryngoscopy without stylet to 11.1 N (IQR: 7.5-14.5) for direct laryngoscopy with stylet under standard conditions. No significant differences in applied force were observed between experienced and less experienced operators for any technique-condition combination (all adjusted p = 1.0; |r| < 0.27). Stylet use significantly increased glottic force across all conditions and groups (median increases 3.4-7.3 N; all p < 0.001; rank-biserial r > 0.75). Videolaryngoscopy with a Macintosh blade produced significantly lower forces than hyperangulated videolaryngoscopy under standard conditions (adjusted p = 0.049). Neither grip strength nor hand size correlated with applied force. Conclusion: Glottic force during endotracheal intubation is determined primarily by technique and stylet use, not operator experience or anthropometrics. Stylet use is the single largest modifiable contributor to glottic force. These findings have implications for device selection, clinical training, and strategies to minimize airway trauma during intubation.
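For scale-free comparisons the authors report rank-biserial correlations alongside Mann-Whitney U tests. A toy sketch of that effect-size calculation, with simulated forces standing in for the real measurements:

```python
# Illustrative rank-biserial effect size from a Mann-Whitney U test.
# Data are simulated placeholders, not study measurements.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(1)
experienced = rng.normal(9.0, 3.0, 30)        # toy peak glottic forces (N)
less_experienced = rng.normal(9.5, 3.0, 20)

u, p = mannwhitneyu(experienced, less_experienced)
# Rank-biserial correlation: r = 2U/(n1*n2) - 1, ranging from -1 to 1
n1, n2 = len(experienced), len(less_experienced)
r = 2 * u / (n1 * n2) - 1
print(f"U = {u:.1f}, p = {p:.3f}, rank-biserial r = {r:.2f}")
```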
Chorney, W.; Lisi, M.
Background: Postoperative delirium (POD) is a common complication of surgery. It is associated with a number of detrimental effects, including mortality and healthcare costs. We sought to determine whether common comorbidity indices are predictors of POD. Methods: Using the Medical Information Mart for Intensive Care (MIMIC)-IV database, we identified 8022 abdominal surgery procedures across 7212 adult patients. We calculated both the Charlson comorbidity index (CCI) and the Elixhauser comorbidity index (ECI) for each procedure and used logistic regression to predict postoperative delirium, which was defined as delirium within 30 days following the procedure. Results: Models based on either the CCI or the ECI were predictive of postoperative delirium (area under the receiver operating characteristic curve (AUC-ROC) of 0.622 and 0.652, respectively). However, the addition of other factors known to be associated with delirium improved model performance (AUC-ROC of 0.680). Conclusions: Both the CCI and ECI are predictors of postoperative delirium in patients undergoing abdominal surgery. Addition of factors known to be associated with delirium renders additional predictive value, and such factors should be included in models that predict postoperative delirium.
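The analysis shape, a single comorbidity index fed into logistic regression and scored by AUC-ROC, is easy to sketch. A hedged toy version on simulated data; the risk model and scores below are invented, not the MIMIC-IV data:

```python
# Toy sketch: predicting postoperative delirium from a comorbidity index
# with logistic regression, evaluated by AUC-ROC. Simulated data only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n = 8000
cci = rng.poisson(3, n)                                # toy Charlson scores
p_delirium = 1 / (1 + np.exp(-(-3.0 + 0.25 * cci)))    # toy risk model
delirium = rng.binomial(1, p_delirium)

X_train, X_test, y_train, y_test = train_test_split(
    cci.reshape(-1, 1), delirium, test_size=0.3, random_state=0)
model = LogisticRegression().fit(X_train, y_train)
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"AUC-ROC = {auc:.3f}")
```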
Bitewlign, M. Z.; Gemeda, L. A.; Delile, S. T.; Seife, M. A.; Zeleke, M. E.; Gebrewahd, T. H.; Gebreslase, L. G.; Tesfagergse, Y. T.
Background: Cesarean section is one of the most commonly performed surgical procedures worldwide and is frequently associated with moderate to severe postoperative pain. While overall pain after cesarean delivery is well described, evidence comparing pain intensity and analgesic use between primary and repeat cesarean sections remains limited. Objective: To compare postoperative pain severity and total analgesic consumption within the first 24 hours among women undergoing primary versus repeat cesarean sections under spinal anesthesia at Tikur Anbessa Specialized Hospital, Addis Ababa, Ethiopia, from January 1 to March 30, 2025. Methods: A prospective cohort study was conducted among 203 women who underwent cesarean delivery under spinal anesthesia. Participants were selected using systematic random sampling and categorized into primary and repeat cesarean groups. Demographic and clinical characteristics were summarized using descriptive statistics. Group comparisons were performed using independent t-tests or Mann-Whitney U tests for continuous variables and Chi-square tests for categorical variables. A p-value < 0.05 was considered statistically significant. Results: Women undergoing repeat cesarean sections experienced significantly higher postoperative pain. During movement, 92.1% of women in the repeat group reported moderate to severe pain compared with 66.7% in the primary group (p < 0.001). At rest, moderate to severe pain occurred in 74.3% of the repeat group versus 52.9% of the primary group (p = 0.002). Pain scores within the first 6 hours were also higher in the repeat group (median NRS 7, IQR 7-8) than in the primary group (median NRS 5, IQR 4-7; p < 0.001). Total analgesic consumption was significantly greater among women in the repeat group (243.3 ± 98.4 mg) compared with the primary group (146.3 ± 82.5 mg; p < 0.001). Conclusions: Repeat cesarean sections are associated with higher early postoperative pain and increased analgesic requirements. These findings support the need for individualized and intensified pain management strategies for women undergoing repeat cesarean delivery. Clinical trial number: Not applicable.
Chorney, W.; Lisi, M.
Background: Postoperative delirium is a common complication in surgical patients and is associated with a multitude of negative outcomes, including mortality, dementia, and increased healthcare costs. Therefore, a better understanding of what factors contribute to postoperative delirium, especially those that can be easily obtained, is important. Methods: We conducted a retrospective cohort study using patients from the Medical Information Mart for Intensive Care (MIMIC)-IV database. Adult patients undergoing procedures in abdominal surgery who did not have pre-existing delirium were included in the study. Overall, we included 8022 procedures across 7212 patients. For each admission, we extracted values obtained from common blood tests, the Charlson and Elixhauser comorbidity scores, and patient demographic information. We used stepwise logistic regression to identify predictive factors of postoperative delirium in this cohort. Results: The model isolated factors well known to be associated with postoperative delirium, such as age, comorbidity (as represented by the Elixhauser comorbidity score), and Parkinson's disease. The model also selected variables that are less studied, such as minimum preoperative platelets and maximum preoperative sodium levels. We hypothesize that the former is associated with postoperative delirium as a surrogate marker of inflammation (platelets being an acute-phase reactant), and the latter because it is a marker of cerebral edema and altered neurotransmission. Conclusion: Preoperative blood tests contain valuable information that can be used alongside patient demographics and past medical history to better predict the risk of postoperative delirium.
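Stepwise selection is the methodological crux here. A minimal forward-selection-by-AIC sketch on simulated data; the candidate predictors are hypothetical stand-ins for the labs and demographics described, not the study's variable set:

```python
# Toy forward stepwise logistic regression by AIC. Simulated data only.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 5000
df = pd.DataFrame({
    "age": rng.normal(65, 12, n),
    "min_platelets": rng.normal(220, 60, n),
    "max_sodium": rng.normal(140, 4, n),
    "noise": rng.normal(0, 1, n),
})
logit = -9 + 0.06 * df["age"] - 0.004 * df["min_platelets"] + 0.02 * df["max_sodium"]
df["delirium"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

selected, remaining = [], list(df.columns[:-1])
while remaining:
    # Try adding each remaining variable; keep the one with the lowest AIC
    aics = {v: sm.Logit(df["delirium"], sm.add_constant(df[selected + [v]])).fit(disp=0).aic
            for v in remaining}
    best = min(aics, key=aics.get)
    current_aic = (sm.Logit(df["delirium"], sm.add_constant(df[selected])).fit(disp=0).aic
                   if selected else np.inf)
    if aics[best] >= current_aic:   # stop when no candidate improves AIC
        break
    selected.append(best)
    remaining.remove(best)
print("Selected predictors:", selected)
```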
Rockholt, M. M.; Wu, R. R.; Seidenberg, B.; Martinez, H.; Momesso, G.; Zhu, E.; Saba, B. v.; Perez, R.; Bi, C.; Park, W.; Bruno, G.; Waren, D.; O'Brien, C.; Denoon, R. B.; Commeh, E. B.; Aggarwal, V. K.; Rozell, J. C.; Furgiuele, D.; Park, H. G.; Schulze, E. T.; Macaulay, W.; Schwarzkopf, R.; Wisniewski, T.; Osorio, R. S.; Doan, L. v.; Wang, J.
INTRODUCTION: Risks for postoperative cognitive dysfunction remain poorly understood. Traditional cognitive screening tools such as the Montreal Cognitive Assessment (MoCA) and the Mini-Mental State Examination (MMSE) are used for perioperative cognitive evaluation but have limited scope, whereas comprehensive in-person testing poses problems for long-term follow-up. METHODS: This prospective cohort study assesses the feasibility of using a remotely performed comprehensive neurocognitive test battery, the Uniform Data Set tele-adapted neuropsychological battery version 3 (UDS v3.0 T-cog), administered at baseline and 1 week, 1 month, and 3 months postoperatively, to comprehensively study neurocognitive outcomes in older adults undergoing orthopedic joint arthroplasty. Patient satisfaction with T-cog was assessed through four survey questions evaluating technical issues, duration, willingness to participate in in-person assessment, and satisfaction with remote assessment at 3 months after surgery. Further assessment of pain and mood also included PROMIS scales, the McGill Pain Questionnaire, and the Pain Catastrophizing Scale, before and 3 months after surgery. RESULTS: 127 participants were enrolled, and of the 120 participants who completed baseline cognitive assessment and underwent surgery, 98 completed cognitive assessments at 3 months. At 3 months, 17% of participants showed an objective decline in cognitive function based on this comprehensive assessment. The remote assessment format was well received, with high participant satisfaction. The UDS v3.0 T-cog identified deficits in specific domains that would have been missed by brief screening instruments, supporting its value for perioperative use. DISCUSSION: This is the first study to utilize this comprehensive remote cognitive assessment tool to study long-term cognitive function. The assessment can be combined with other preoperative outcome assessments in older adults undergoing surgery. Highlights:
- Current detection of perioperative cognitive outcomes in older adults relies on in-person cognitive assessments that are varied in methodology and often lack sensitivity and specificity.
- The UDS v3.0 T-cog identified objective cognitive decline in 17% of patients after orthopedic arthroplasty while also detecting early non-memory cognitive decline through the more comprehensive test battery, with high participant satisfaction and retention, supporting remote assessment feasibility.
- These findings suggest that remote comprehensive cognitive assessments are an effective tool to provide early detection and risk stratification for perioperative neurocognitive dysfunction in older patients.
Wolosker, M. B.; Tedde, M. L.; Noro Hamilton, N.; Wolosker, N.; Schmidt Aguiar, W. W.; da Costa Ferreira, H. P.; Westphal, F. L.; Rodrigues Lima, A. M.; de Oliveira, H. A.; L F Pereira, S. T.; de Oliveira Riuto, F.; C Resende, G.; Krum Brenner, M. M.; Bonomi, D. d. O.; Brero Valero, C. E.; pego fernandes, P. m.
OBJECTIVE: To compare, in a Brazilian population, the clinical efficacy and quality-of-life (QoL) impact of one-stage bilateral thoracic sympathectomy (BTS) versus unilateral sympathectomy on the dominant side (UniS), with additional analysis of patients who later underwent contralateral surgery (two-stage bilateral, 2stS). METHODS: Prospective, randomized, controlled, multicenter trial (11 centers) including 163 adults with primary palmar hyperhidrosis. Participants were randomized 1:1 to BTS or UniS. From 6 months onward, UniS patients could elect contralateral sympathectomy (2stS). Sweating severity was assessed using the Hyperhidrosis Disease Severity Scale (HDSS) across 18 anatomical sites at each visit. Compensatory sweating (CS) was defined as new sweating in previously unaffected areas (preoperative HDSS = 1) and graded by the magnitude of HDSS increase. QoL was measured with two complementary validated instruments: HidroQOL and the Horn questionnaire. RESULTS: Baseline characteristics were similar between groups, with most participants presenting severe preoperative disease. Improvement in the operated (dominant) hand was comparable after BTS and UniS, whereas control of the non-operated hand favored BTS. In the UniS group, spontaneous contralateral improvement occurred in approximately one-seventh of untreated hands. The proportion of patients without CS was similar in both groups (~25%), but severe CS was more frequent after BTS (40.4% vs 21.0%, p = 0.0344). QoL improved in both groups, with larger and more sustained reductions in Horn and HidroQOL scores after BTS (p < 0.001). In the 2stS subgroup, contralateral surgery produced a consistent HDSS decrease and marked QoL improvement, with predominantly mild additional CS. CONCLUSIONS: BTS provides more complete symptom control and greater QoL improvement, but at the cost of more severe CS. UniS offers excellent control on the treated side, may reduce severe CS, and supports a staged strategy in which some patients avoid a second procedure (requested by 22.5% in this study); when needed, contralateral completion tends to restore additional clinical and QoL gains.
Armenta Salas, M.; Zhang, A.; Girard, T. D.; Devlin, J. W.; Barr, J.
BACKGROUND: Delirium is common in critically ill adults but often goes unrecognized and undertreated. Little is known about the perceptions of ICU nurse and physician leaders regarding ICU delirium detection and management and the potential role of objective continuous delirium monitoring to facilitate ICU delirium care. RESEARCH QUESTION: What are the perceptions of ICU leaders regarding the current challenges associated with delirium recognition and management and the potential benefits of continuous delirium monitoring? STUDY DESIGN AND METHODS: We conducted a blinded, cross-sectional, electronic survey of ICU leaders across the U.S., including physician directors and nursing managers with ≥3 years of ICU leadership experience. We asked about perceptions of the effectiveness of current delirium clinical assessment tools, current delirium detection and management challenges, and how an objective, continuous delirium monitoring system might impact clinician practice and patient outcomes in their ICU. RESULTS: Among the 81 respondents (62 physicians, 19 nurses), most (76%) reported that recommended delirium assessment tools (CAM-ICU, ICDSC) are used in their ICUs, though there were mixed perceptions of how reliably they are conducted. A majority (63-90%) perceived that current bedside assessments delay and limit the recognition of ICU delirium. Nearly all (89%) agreed an objective delirium monitoring tool would be more clinically valuable than current delirium assessment tools and that it would support real-time delirium management by clinicians. CONCLUSIONS: ICU leaders perceive that there are limitations to using clinical delirium assessment tools in ICU patients to effectively detect and manage ICU delirium. Most felt that an objective delirium monitor could facilitate delirium detection and potentially expedite appropriate delirium management in patients.
Born, G.
Background: Quality measurement in intensive care emphasizes task completion: whether assessments were documented and protocols followed. Electronic health record (EHR) systems capture these signals in real time, yet current metrics cannot distinguish task completion from cognitive clinical engagement. A prior analysis demonstrated that omission of orientation assessment predicted a 4.29-fold increase in hospital mortality among low-acuity ICU patients [1]. Whether combining this marker with routine task-completion data yields a computable phenotype with independent prognostic value has not been studied. Objective: To define, validate, and characterize "discordant care", a computable EHR phenotype defined as completion of ≥6 of 8 routine nursing assessments without orientation assessment documentation, as a predictor of hospital mortality, distinguishing patient-level confounding from care process signal. Methods: Retrospective cohort study using MIMIC-IV v3.1 (2008-2022), including 46,004 adult ICU stays with SOFA scores 0-2 and length of stay ≥24 hours in non-neurological ICUs. Primary exposure: discordant care, computed from structured nursing flowsheet data within 24 hours of admission. Primary outcome: hospital mortality. Progressive covariate adjustment included mechanical ventilation, sedation, and diagnosis. Results: Discordant care was present in 8891 patients (19.3%), with 69.7% mechanically ventilated versus 25.3% of concordant patients. Two overlapping signals were identified: a patient-level signal driven by ventilation/sedation (full adjustment OR 1.19, 95% CI 1.09-1.30) and a care process signal in non-ventilated patients (OR 2.14, 1.87-2.44; N=30,314). Among non-ventilated SOFA 0 patients, the OR was 2.60 (2.13-3.18; N=16,295). The signal was present across all 7 major diagnosis categories. Quantitative bias analysis indicated unmeasured delirium could attenuate but likely not fully explain the non-ventilated signal. Conclusions: Discordant care identifies two phenomena: a patient-level signal from ventilation/sedation and a care process signal where assessable patients receive routine care without cognitive engagement (OR 2.14-2.60). This care process signal is invisible to existing quality metrics and detectable in real time. Prospective validation with systematic delirium screening is needed.
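The phenotype definition is directly computable from flowsheet flags. A minimal sketch, with hypothetical column names rather than actual MIMIC-IV flowsheet labels:

```python
# Minimal sketch of the "discordant care" phenotype as described: >=6 of 8
# routine assessments documented in the first 24 h, but no orientation
# assessment. Column names are hypothetical placeholders.
import pandas as pd

ASSESSMENTS = ["pain", "skin", "fall_risk", "gi", "respiratory",
               "cardiac", "peripheral_pulses", "orientation"]

flowsheet = pd.DataFrame({            # one row per ICU stay, True = documented
    "stay_id": [1, 2, 3],
    "pain": [True, True, True], "skin": [True, True, False],
    "fall_risk": [True, True, False], "gi": [True, True, True],
    "respiratory": [True, True, True], "cardiac": [True, True, False],
    "peripheral_pulses": [True, False, False], "orientation": [True, False, False],
})

n_done = flowsheet[ASSESSMENTS].sum(axis=1)
flowsheet["discordant_care"] = (n_done >= 6) & ~flowsheet["orientation"]
print(flowsheet[["stay_id", "discordant_care"]])   # only stay 2 qualifies
```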
Silva-Passadouro, B.; Khoja, O.; Casson, A. J.; Delis, I.; Brown, C.; Sivan, M.
New-onset chronic pain is a common and debilitating symptom of Long COVID (LC) that remains not fully understood in terms of pathophysiology and therapeutic targets. A growing body of evidence in chronic pain syndromes similar to LC demonstrates an association between EEG alpha oscillatory activity and the experience of pain, with clinical studies showing maladaptive changes in oscillatory activity, particularly a slowing of alpha activity. This study aims to investigate the association between EEG alpha oscillatory activity and pain perception in new-onset LC-chronic pain. We recruited 31 individuals (20 females) with a clinical diagnosis of LC reporting new-onset chronic pain and 31 healthy pain-free age- and sex-matched controls. Participants completed questionnaires regarding symptoms and psychological functioning prior to recording eyes-open resting-state EEG. Peak alpha frequency (PAF) and spectral power within the alpha band (8-13 Hz) were extracted from EEG signals. Lower PAF over the posterior scalp region was significantly associated with higher LC-chronic pain severity when controlling for age and depression. This observation was consistent across PAF estimation methods. PAF was significantly increased, particularly in the posterior region, in the moderate pain LC subgroup compared to both the severe pain subgroup and controls, while alpha power did not differ between the three groups and was not associated with pain severity. Our findings highlight associations between PAF and pain symptoms in a new post-infection chronic pain syndrome. PAF can thus be explored as a potential biomarker and therapeutic target for EEG-based neuromodulation interventions in LC-chronic pain. These results may have implications for other similar chronic pain syndromes. Summary: Lower resting-state EEG peak alpha frequency in the posterior scalp region is associated with higher severity of new-onset Long COVID chronic pain.
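Both outcome measures, peak alpha frequency and alpha-band power, fall out of a single power spectrum. A toy sketch on a synthetic signal; the sampling rate and analysis choices are assumptions, not the study's pipeline:

```python
# Sketch: peak alpha frequency (PAF) and alpha power from a Welch spectrum.
# The "EEG" below is synthetic: a 9.5 Hz alpha rhythm buried in noise.
import numpy as np
from scipy.signal import welch
from scipy.integrate import trapezoid

fs = 256                                   # assumed sampling rate (Hz)
t = np.arange(0, 120, 1 / fs)              # 2 minutes of signal
rng = np.random.default_rng(4)
signal = np.sin(2 * np.pi * 9.5 * t) + rng.normal(0, 1.5, t.size)

freqs, psd = welch(signal, fs=fs, nperseg=4 * fs)   # 0.25 Hz resolution
alpha = (freqs >= 8) & (freqs <= 13)
paf = freqs[alpha][np.argmax(psd[alpha])]           # frequency of max alpha power
alpha_power = trapezoid(psd[alpha], freqs[alpha])   # total alpha-band power
print(f"PAF = {paf:.2f} Hz, alpha power = {alpha_power:.2f}")
```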
Ottenhof, M. M. J.
Background: The FACE-Q Skin Cancer Module is a validated patient-reported outcome measure for facial skin cancer surgery. However, the minimally important difference (MID), the smallest change in score perceived as meaningful by patients, has not been established. Without the MID, individual score changes cannot be interpreted clinically. This study aimed to determine the MID for all four FACE-Q Skin Cancer scales. Methods: Prospective cohort study at a tertiary center (2017-2018). Patients completed the FACE-Q preoperatively and at 1 week, 3 months, and 1 year postoperatively. The MID was estimated using distribution-based methods (0.5 standard deviation, standard error of measurement) and an anchor-based approach using the FACE-Q Adverse Effects scale as an implicit anchor. Internal consistency (Cronbach's α), effect sizes, and standardized response means were calculated. Results: Of 287 enrolled patients, 111 had paired baseline-three-month data. All scales had strong internal consistency (α = 0.82-0.93). Cancer worry showed the largest improvement from baseline to three months (mean change -3.1 ± 5.8; SRM = -0.54; p < 0.001). When we combined the estimates, the MID values (sum scores / 0-100 scale) were: Appearance Satisfaction 2.0 / 5.6, Psychosocial Distress 2.0 / 6.2, Cancer Worry 2.5 / 6.2, and Scar Satisfaction 2.0 / 6.2. Anchor-based estimates for the Scar scale (2.4 sum points) confirmed distribution-based findings. Conclusions: This study establishes the first MID values for the FACE-Q Skin Cancer Module. A change of approximately 2-2.5 sum points (5-6 points on a 0-100 scale) represents a minimally important difference across all scales. These thresholds enable clinicians and researchers to interpret individual FACE-Q score changes and design adequately powered clinical trials.
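The two distribution-based estimators are one-liners. A worked sketch with invented scores, assuming a reliability in the reported range:

```python
# Worked sketch of the distribution-based MID estimators described:
# 0.5 x SD of baseline scores, and the standard error of measurement,
# SEM = SD * sqrt(1 - reliability). Numbers are illustrative only.
import numpy as np

baseline_scores = np.array([55, 62, 48, 70, 58, 65, 52, 60])  # toy sum scores
cronbach_alpha = 0.88                                          # assumed reliability

sd = baseline_scores.std(ddof=1)
mid_half_sd = 0.5 * sd
sem = sd * np.sqrt(1 - cronbach_alpha)
print(f"0.5 SD = {mid_half_sd:.1f}, SEM = {sem:.1f}")
# The study combined such estimates (plus an anchor-based one) into a
# single MID of roughly 2-2.5 sum points per scale.
```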
Karlsen, A. P. H.; Olsen, M. H.; Barfod, K. W.; Lunn, T. H.; Bitsch, M. S.; Wiberg, S. C.; Laigaard, J. H.
Introduction: Patients undergoing anterior cruciate ligament (ACL) reconstruction experience substantial postoperative pain, which delays recovery and leads to both immediate and long-term opioid use. In other knee procedures, infiltration between the popliteal artery and the capsule of the posterior knee (IPACK) has demonstrated analgesic and opioid-reducing effects. However, the effect in patients undergoing ACL reconstruction has not been investigated. We aimed to investigate the real-world effect of IPACK on immediate postoperative opioid consumption in patients undergoing ACL reconstruction. Participants: In this single-centre difference-in-differences cohort study, all patients who underwent ACL reconstruction surgery at Bispebjerg Hospital, Denmark, from 1 February 2024 to 30 June 2025 are included. The study further includes a similar reference cohort, comprising all patients who underwent trochleaplasty, Elmslie-Trillat, or medial patellofemoral ligament reconstruction during the same period and at the same hospital. Intervention: The primary exposure is the implementation of IPACK as part of perioperative management for ACL reconstruction on 1 January 2025. The IPACK was performed under ultrasound guidance, immediately before surgery, administering 20 mL of ropivacaine 0.5% between the popliteal artery and the posterior knee capsule. Outcomes: The primary outcome is the cumulative opioid consumption from surgical incision to 2 hours postoperatively. Secondary outcomes include the cumulative opioid consumption from incision to 24 hours postoperatively, the worst reported pain score at 0-24 h postoperatively, occurrence of postoperative nausea or vomiting (PONV) 0-24 h postoperatively, length of PACU stay, length of hospital stay, and nerve injuries. As an exploratory outcome, carbon dioxide emissions will be investigated. Statistical analysis: The main analysis will be a standard two-way fixed effects DiD regression assessing the changes occurring at the time of implementation of IPACK in the ACL cohort, with adjustment for the underlying time trend. Continuous outcomes are reported as mean difference (95% confidence interval [CI]), and binary outcomes as absolute and relative risks (95% CI).
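The planned analysis is a standard difference-in-differences regression. A minimal sketch under assumed names (`opioid_2h`, `acl`, `post`, and `month` are hypothetical, not the protocol's variables), with simulated data standing in for the cohort:

```python
# Toy two-way fixed effects DiD: the acl:post interaction is the estimate
# of the IPACK effect, with month as the shared underlying time trend.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n = 600
df = pd.DataFrame({
    "acl": rng.binomial(1, 0.6, n),            # ACL vs reference cohort
    "month": rng.integers(0, 17, n),           # Feb 2024 = 0 ... Jun 2025 = 16
})
df["post"] = (df["month"] >= 11).astype(int)   # IPACK implemented Jan 2025
# Toy outcome: downward time trend plus a true IPACK effect of -3 mg
df["opioid_2h"] = (15 - 0.2 * df["month"] - 3 * df["acl"] * df["post"]
                   + rng.normal(0, 4, n))

model = smf.ols("opioid_2h ~ acl * post + month", data=df).fit()
print(model.params["acl:post"], model.conf_int().loc["acl:post"].values)
```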
Tjepkema-Cloostermans, M. C.; Beishuizen, A.; Strang, A. C.; Keijzer, H. M.; Telleman, J. A.; Smook, S. P.; Vermeijden, J. W.; Hofmeijer, J.; van Putten, M. J. A. M.
Objective: Despite substantial variability in the severity of post-anoxic encephalopathy, all comatose patients after cardiac arrest are usually treated according to the same standardized intensive care protocol, including sedation, mechanical ventilation, and targeted temperature management (TTM). We hypothesize that patients with a favourable EEG pattern (continuous EEG within 12 hours after cardiac arrest) may not benefit from prolonged sedation and TTM. We studied the feasibility and safety of early cessation of sedation and TTM in this subgroup. Methods: We conducted a non-randomized, controlled intervention study including 40 adult patients admitted to the ICU with postanoxic encephalopathy after cardiac arrest and an early (< 12 hours) favourable EEG pattern. The control group received standard care with sedation and TTM for at least 24-48 hours, whereas the intervention group underwent early cessation of sedation and TTM as soon as possible after establishing a favourable EEG, followed by weaning from mechanical ventilation. The primary outcome was duration of mechanical ventilation. Secondary outcomes included ICU length of stay, total sedation time, number of ICU complications, and neurological outcomes at 3 and 6 months. Results: Duration of mechanical ventilation was significantly shorter in the intervention than in the control group (median 12 vs 28 h, p < 0.001). Median ICU length of stay and median total sedation time were also reduced by more than 50% in the intervention group, from 2.5 to 1.2 days (p = 0.001) and from 27 to 12 h (p < 0.001), respectively. There was no increase in ICU complications in the intervention group. No statistically significant differences in neurological outcomes at 3 or 6 months were observed. Conclusion: Early withdrawal of sedation is feasible and safe in patients with an early favourable EEG following cardiac arrest. The study was underpowered to detect possible differences in long-term neurological recovery. Significance: Shortening sedation and mechanical ventilation is likely to result in direct reductions in healthcare costs and contribute to more appropriate care. Larger studies are needed to evaluate the impact on long-term neurological outcomes.
Ajay, E. A.; Khan, F.; Bhattacharjee, A.; Pickering, A. E.; Dunham, J. P.
Introduction: Chronic pain in fibromyalgia may be driven by abnormal ongoing activity in a subclass of C-fibre nociceptors known as Type 1B or CMi nociceptors. As is common in C-nociceptor microneurography studies, the modest patient numbers in these prior studies generate large confidence intervals around the point estimate of the prevalence of this abnormal activity. This complicates the interpretation of the relative importance of this ongoing nociceptor activity as a pain-generating mechanism in fibromyalgia. The study aims to improve precision via an adaptive Bayesian protocol that maximises the yield and quality of data collection whilst minimising patient burden. Methods: The study employs an optimised microneurography protocol with an adaptive study design. The microneurography protocol incorporates early identification of CMi nociceptors via an abbreviated activity-dependent slowing protocol to increase yields, enabling efficient collection of the primary outcome data. The adaptive study design will use Bayesian principles to iteratively assess the predictive probability of futility, and terminate early if there is high confidence that the hypothesis is false (see the sketch after this entry). Furthermore, the study will employ questionnaires to explore links between pain in the area under study and the electrophysiology data. Finally, quantitative sensory testing will be used to investigate whether the irritable nociceptor phenotype is associated with abnormalities in CMi nociceptor physiology. Ethics & Dissemination: This study has received HRA REC approval in the UK. Participants will provide written informed consent, and may withdraw at any time without consequence. At the end of the study, the results will be disseminated through peer-reviewed publication, and the data made available via a data repository. Strengths & limitations of this study:
- Bayesian predictive probability of futility to minimise patient burden in microneurography.
- Microneurography for objective interrogation of the peripheral nervous system.
- Optimised microneurography protocol to efficiently answer primary hypotheses.
- Subjective elements of the early termination criteria assessed and co-developed with a Patient and Public Inclusion and Engagement Group.
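The futility machinery is worth making concrete. A hedged beta-binomial sketch of a predictive probability calculation; the prior, counts, and thresholds are invented for illustration and are not the protocol's actual stopping rule:

```python
# Toy Bayesian predictive probability of futility with a beta-binomial model.
import numpy as np
from scipy.stats import betabinom

n_so_far, hits_so_far = 10, 1     # recordings with abnormal CMi activity so far
n_remaining = 20                  # planned additional recordings
success_threshold = 8             # final "hits" needed to support the hypothesis

# Posterior over the prevalence under a uniform Beta(1,1) prior
a_post, b_post = 1 + hits_so_far, 1 + (n_so_far - hits_so_far)

# Predictive probability of still reaching the threshold by study end
needed = success_threshold - hits_so_far
p_success = 1 - betabinom.cdf(needed - 1, n_remaining, a_post, b_post)
print(f"Predictive probability of success = {p_success:.3f}")
# A prespecified rule might stop for futility if this falls below, say, 0.05.
```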
Delbari, P.; Pourahmad, R.; Zare, A. h.; Sabet, S.; Ahmadvand, M. H.; rasouli, K.; Jakobs, M.
Background: Persistent Spinal Pain Syndrome (PSPS) type II represents a challenging clinical entity with limited therapeutic options. Various spinal cord stimulation (SCS) modalities have emerged as potential treatments, but their comparative effectiveness remains unclear. Objective: Our goal in this paper is to systematically evaluate and compare the efficacy of different SCS modalities in patients with PSPS type II through meta-analysis of available randomized controlled trials. Evidence Review: We conducted a systematic review following PRISMA guidelines, searching major databases for randomized controlled trials evaluating SCS modalities in PSPS type II patients until the end of May 2025 (search updated on October 3rd). Primary outcomes included pain intensity (VAS) and functional disability (ODI) at 6 and 12 months. Subgroup analyses compared tonic versus burst stimulation and high-frequency versus low-frequency SCS. Findings: Nine randomized controlled trials were included, encompassing 565 patients across different SCS modalities. For the primary outcome of clinically meaningful pain relief (≥50% reduction), pooled analysis demonstrated that 45% (95% CI: 18-75%, I² = 92.2%) of patients achieved this threshold for back pain and 55% (95% CI: 45-65%, I² = 0%) for leg pain. Subgroup analysis revealed significant differences in back pain responder rates by stimulation modality: high-frequency SCS demonstrated responder rates of 92% (95% CI: 79-98%) versus 28% (95% CI: 13-49%) for conventional frequencies (p < 0.001). For leg pain, no significant difference was observed between tonic (51%, 95% CI: 37-65%) and burst stimulation (60%, 95% CI: 45-74%, p = 0.36), and mean VAS scores demonstrated significantly lower pain with high-frequency SCS (13.30, 95% CI: 8.82-17.78) compared to conventional frequency (28.42, 95% CI: 24.02-32.88, p < 0.0001). For back pain, mean VAS scores decreased from a baseline of 73.03 to 41.67 (95% CI: 36.12-47.22, I² = 22.8%) at 6 months and remained stable at 35.66 (95% CI: 25.39-45.93, I² = 75.0%) at 12 months. Leg pain showed more pronounced improvement, with VAS scores declining from a baseline of 61.81 to 23.75 (95% CI: 17.69-29.81, I² = 78.8%) at 6 months and 29.16 (95% CI: 24.81-33.52, I² = 0%) at 12 months. Meta-regression identified longer pain duration and older age as positive predictors of response, while higher baseline leg pain predicted lower responder rates. Serious adverse events occurred in 10%, with a 16% revision surgery rate. Only two studies demonstrated a low risk of bias across all domains. Conclusions: Current evidence demonstrates that various SCS modalities provide clinically meaningful pain relief in PSPS type II patients, with approximately half achieving ≥50% pain reduction. High-frequency SCS shows significantly superior responder rates for back pain compared to conventional tonic stimulation, while burst stimulation yields significantly superior reductions in continuous pain intensity metrics. However, the limited number of studies, substantial heterogeneity, and lack of head-to-head comparisons prevent definitive recommendations regarding optimal stimulation parameters. Future large-scale randomized trials with standardized protocols and responder-based outcomes are needed to establish evidence-based treatment algorithms for PSPS type II patients.
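The pooled responder rates above come from random-effects proportion pooling. A compact sketch of one standard recipe (logit transform with DerSimonian-Laird τ²); the per-study counts are invented:

```python
# Toy random-effects pooled proportion with logit transform and DL tau^2.
import numpy as np

responders = np.array([20, 35, 12, 40, 18])   # toy >=50%-relief responders
totals = np.array([45, 60, 50, 55, 40])

p = responders / totals
logit = np.log(p / (1 - p))
var = 1 / responders + 1 / (totals - responders)   # variance of the logit

w = 1 / var                                        # fixed-effect weights
pooled_fe = (w * logit).sum() / w.sum()
q = (w * (logit - pooled_fe) ** 2).sum()           # Cochran's Q
dof = len(p) - 1
tau2 = max(0, (q - dof) / (w.sum() - (w ** 2).sum() / w.sum()))  # DL tau^2
i2 = max(0, (q - dof) / q) * 100                   # I^2 heterogeneity

w_re = 1 / (var + tau2)                            # random-effects weights
pooled = (w_re * logit).sum() / w_re.sum()
print(f"Pooled proportion = {1 / (1 + np.exp(-pooled)):.2f}, I² = {i2:.0f}%")
```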
Kang, C.-Q.; Chen, L.-P.; Wang, Y.-X.
Background: Early laparoscopic cholecystectomy (ELC) is the standard treatment for acute calculous cholecystitis (ACC), but difficult laparoscopic cholecystectomy (DLC) remains a challenge. Predicting DLC and ACC severity is crucial for clinical decision-making. Methods: This retrospective single-center study included 198 ACC patients who underwent ELC. Preoperative clinical, laboratory, and imaging data were analyzed. DLC was defined by operative time >90 min, conversion, or subtotal cholecystectomy. ACC severity was graded using TG18. Multivariate logistic regression identified independent predictors. Results: DLC occurred in 81 (40.9%) patients; 102 (51.5%) had severe ACC. Serum cholinesterase (ChE) and CRP were independent predictors of DLC. CRP and male sex independently predicted ACC severity. Other markers (e.g., NLR, PCT) were not independently associated. Conclusion: Preoperative ChE and CRP levels are reliable predictors of DLC, while CRP and male sex predict ACC severity. These findings support their use in risk stratification and surgical planning.
Born, G.
Objective: To develop and validate a predictive model incorporating behavioral telemetry signals (documentation pattern anomalies derived from routine EHR charting) alongside clinical variables for ICU mortality prediction in patients with low acute physiologic derangement. Materials and Methods: Retrospective cohort study of 46,002 adult ICU stays from MIMIC-IV v3.1 (2008-2022) with SOFA scores 0-2, excluding neurological units. We extracted 66 variables spanning demographics, acuity, behavioral telemetry, clinical enrichment, and temporal factors. Progressive logistic regression models (M1-M7) were compared using cross-validation, DeLong tests, net reclassification improvement, and calibration analysis. Results: Overall mortality was 9.34% (4,295 deaths). The clinical model (M5) achieved cross-validated AUROC 0.691 versus 0.639 for demographics alone (M2; ΔAUROC = 0.052, DeLong p = 4.41×10⁻⁴⁷). NRI was 24.3%. Discordant care patients received 30.5% more chart events than concordant patients, with the sole deficit in neurological assessments (-15.4%), refuting the neglect hypothesis. Kaplan-Meier analysis confirmed survival separation (log-rank χ² = 138.6, p = 5.32×10⁻³²). In the most conservative subgroup (SOFA 0, no sedation, no ventilation, N = 11,158), orientation omission remained associated with mortality (adjusted OR 1.52, p = 0.027). Discussion: Deep sedation and mechanical ventilation function as mediators on the causal pathway rather than traditional confounders; the discordant care signal retains significance after full sedation adjustment. Conclusion: Documentation pattern analysis adds measurable predictive value for ICU mortality risk stratification and represents a novel signal for real-time EHR-based clinical decision support.
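NRI is the least familiar of the comparison metrics used here. A small sketch of the category-free (continuous) NRI between two risk models, on simulated predictions rather than the study's outputs:

```python
# Toy category-free net reclassification improvement (NRI) between two
# risk models. Predicted risks are simulated for illustration.
import numpy as np

rng = np.random.default_rng(6)
n = 2000
y = rng.binomial(1, 0.1, n)                    # ~10% mortality, as in the cohort
risk_old = np.clip(0.10 + rng.normal(0, 0.05, n) + 0.05 * y, 0, 1)
risk_new = np.clip(risk_old + 0.03 * (2 * y - 1) + rng.normal(0, 0.02, n), 0, 1)

events, nonevents = y == 1, y == 0
up = risk_new > risk_old                       # risk moved upward
# NRI = [P(up|event) - P(down|event)] + [P(down|nonevent) - P(up|nonevent)]
nri_events = up[events].mean() - (~up[events]).mean()
nri_nonevents = (~up[nonevents]).mean() - up[nonevents].mean()
print(f"NRI = {nri_events + nri_nonevents:.3f}")
```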
Rabienia Haratbar, S.; Hamedi, F.; Mohtasebi, M.; Chen, L.; Wong, L.; Yu, G.; Chen, L.
Significance: Mastectomy skin flap necrosis remains a major complication in implant-based breast reconstruction due to inadequate tissue blood flow. Existing diagnostic technologies are limited by shallow depth sensitivity, dye-related risks, contact requirements, and an inability to continuously assess blood flow. Aim: This study aimed to translate a noncontact, dye-free, depth-sensitive speckle contrast diffuse correlation tomography (scDCT) technique to a clinically relevant porcine skin flap model for assessing flap blood flow and viability. Approach: The scDCT system was optimized to image blood flow over seven days in four porcine skin flaps: Sham (SH), Implant (IM), Half Necrosis (HN), and Full Necrosis (FN). Measurements were compared with indocyanine green angiography (ICG-A) as a reference standard. Results: scDCT enabled longitudinal monitoring of flap blood flow, revealing significant flow differences among flap types and over time. FN flaps consistently exhibited the most severe flow impairment, while other flap types showed partial or complete recovery over time, distinguishing nonviable from viable tissue. scDCT measurements demonstrated moderate to strong correlations with ICG-A across time points. Conclusions: The findings support scDCT as a promising perioperative imaging modality for improving flap necrosis risk stratification and surgical decision-making, with future work focused on large-scale validation and clinical translation.
Andriazzi, V. H.; Curcio, R. P.; Novais, M. A. R. A.; Fernandes, B. L. G.; Rosa, G. C.; Vasconcelos, J. G. S.; Quineper, J. N.
Objective: To compare the efficacy and safety of etomidate versus ketamine as induction agents for rapid sequence intubation in critically ill adults, focusing on 28-day mortality and post-intubation hypotension. Data Sources: PubMed, Embase, and the Cochrane Library were systematically searched from inception to January 2026. Reference lists of included studies were also manually screened. Study Selection: We included randomized controlled trials (RCTs) comparing single-dose intravenous ketamine versus etomidate for emergency rapid sequence intubation in critically ill adults (≥18 years) in non-operating-room settings (e.g., intensive care unit or emergency department). Data Extraction: Two investigators independently screened records, extracted data using a standardized form, and assessed the risk of bias using the RoB 2 tool. The certainty of evidence was evaluated using the GRADE framework. Data Synthesis: Six RCTs comprising 4,108 patients (2,046 assigned to ketamine and 2,062 to etomidate) were included. The pooled analysis showed no statistically significant difference in 28-day mortality between the ketamine and etomidate groups (39.0% vs. 40.3%; relative risk [RR] 0.96; 95% CI, 0.89-1.03; p = 0.29; I² = 11%). In a prespecified subgroup analysis of patients with sepsis (n=1,546), mortality also did not differ significantly (RR 0.94; 95% CI, 0.86-1.03). However, ketamine was associated with a statistically significant increase in the incidence of post-intubation hypotension (14.2% vs. 11.3%; RR 1.25; 95% CI, 1.01-1.53; p = 0.04; I² = 0%). No significant differences were observed regarding peri-intubation cardiac arrest, first-attempt intubation success, or ventilator- and intensive care unit-free days. Conclusions: There is no statistical difference in 28-day mortality between etomidate and ketamine for emergency intubation in critically ill adults, including those with sepsis. The higher incidence of post-intubation hypotension with ketamine suggests etomidate presents a more favorable hemodynamic safety profile in this setting. Key points: Question: Does the choice between etomidate and ketamine for emergency intubation in critically ill patients impact 28-day mortality? Findings: In this systematic review and meta-analysis of randomized controlled trials, there was no statistically significant difference in 28-day mortality between patients induced with ketamine (39.0%) and those induced with etomidate (40.3%). Meaning: The use of etomidate versus ketamine for rapid sequence intubation does not alter 28-day mortality, indicating that the choice of induction agent should be individualized.
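The mortality RR above is textbook inverse-variance pooling on the log scale. A minimal sketch with placeholder event counts, not the trial data:

```python
# Toy inverse-variance pooled relative risk on the log scale.
import numpy as np

# (events, total) per arm for three hypothetical trials
ketamine = np.array([[80, 200], [150, 400], [60, 150]])
etomidate = np.array([[85, 200], [160, 400], [65, 150]])

rr = (ketamine[:, 0] / ketamine[:, 1]) / (etomidate[:, 0] / etomidate[:, 1])
log_rr = np.log(rr)
# Variance of log RR: 1/a - 1/n1 + 1/c - 1/n2
var = (1 / ketamine[:, 0] - 1 / ketamine[:, 1]
       + 1 / etomidate[:, 0] - 1 / etomidate[:, 1])

w = 1 / var                                    # inverse-variance weights
pooled = (w * log_rr).sum() / w.sum()
se = np.sqrt(1 / w.sum())
lo, hi = np.exp(pooled - 1.96 * se), np.exp(pooled + 1.96 * se)
print(f"Pooled RR = {np.exp(pooled):.2f} (95% CI {lo:.2f}-{hi:.2f})")
```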